23 research outputs found

    Ethnicity- and gender-based subject retrieval using 3-D face-recognition techniques

    While retrieving datasets of human subjects based on demographic characteristics such as gender or race has wide-ranging applications, it remains poorly studied. In contrast, a large body of work exists in the field of biometrics, which has a different goal: the recognition of human subjects. Because of this disparity of interest, existing methods for retrieval based on demographic attributes tend to lag behind the better-studied algorithms designed purely for face matching. This raises the question of whether a face-recognition system could be leveraged to solve these other problems and, if so, how effective it could be. In the current work, we explore the limits of such a system for gender and ethnicity identification given (1) a ground truth of demographically labeled, textureless 3-D models of human faces and (2) a state-of-the-art face-recognition algorithm. Once trained, our system is capable of classifying the gender and ethnicity of any such model of interest. Experiments are conducted on 4007 facial meshes from the benchmark Face Recognition Grand Challenge v2 dataset. © 2010 Springer Science+Business Media, LLC
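The retrieval idea above can be sketched in a few lines: reuse a face matcher's pairwise similarity scores to transfer a demographic label from the best-matching labeled gallery entry (1-nearest-neighbor over match scores). Everything below, including the toy feature "signatures" and the negative-distance matcher, is an illustrative assumption, not the paper's method.

```python
import numpy as np

def match_score(sig_a, sig_b):
    # Stand-in for a face-recognition similarity (higher = more similar):
    # negative Euclidean distance between feature signatures.
    return -np.linalg.norm(sig_a - sig_b)

def classify_attribute(probe_sig, gallery_sigs, gallery_labels):
    # Score the probe against every labeled gallery signature and
    # return the attribute label of the top-scoring match.
    scores = [match_score(probe_sig, g) for g in gallery_sigs]
    return gallery_labels[int(np.argmax(scores))]

# Toy gallery: 2-D signatures labeled with a demographic attribute.
gallery = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]])
labels = ["female", "female", "male"]
probe = np.array([4.6, 5.2])
print(classify_attribute(probe, gallery, labels))  # prints "male"
```

The point of the sketch is that no attribute-specific classifier is trained; the matcher's similarity ordering alone carries the demographic information.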

    Unified 3D face and ear recognition using wavelets on geometry images

    As the accuracy of biometrics improves, it is getting increasingly hard to push the limits using a single modality. In this paper, a unified approach that fuses three-dimensional facial and ear data is presented. An annotated deformable model is fitted to the data and a geometry image is extracted. Wavelet coefficients are computed from the geometry image and used as a biometric signature. The method is evaluated using the largest publicly available database and achieves 99.7% rank-one recognition rate. The state-of-the-art accuracy of the multimodal fusion is attributed to the low correlation between the individual differentiability of the two modalities. © 2007 Pattern Recognition Society
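The signature extraction described above can be sketched as follows: a geometry image (a regular 2-D resampling of the 3-D surface) is wavelet-transformed and a subset of coefficients is kept as a compact biometric signature. A single-level 2-D Haar transform stands in here for the paper's actual wavelet choice; the whole sketch is an illustrative assumption.

```python
import numpy as np

def haar2d(img):
    # One level of the 2-D Haar transform: average/difference along
    # columns, then along rows, yielding LL, LH, HL, HH subbands.
    rows_lo = (img[:, 0::2] + img[:, 1::2]) / 2.0
    rows_hi = (img[:, 0::2] - img[:, 1::2]) / 2.0
    ll = (rows_lo[0::2] + rows_lo[1::2]) / 2.0
    lh = (rows_lo[0::2] - rows_lo[1::2]) / 2.0
    hl = (rows_hi[0::2] + rows_hi[1::2]) / 2.0
    hh = (rows_hi[0::2] - rows_hi[1::2]) / 2.0
    return ll, lh, hl, hh

def signature(geometry_image):
    # Keep only the low-frequency (LL) subband, flattened, as a
    # compact signature; comparison can then be done in this space.
    ll, _, _, _ = haar2d(geometry_image)
    return ll.ravel()

img = np.arange(16, dtype=float).reshape(4, 4)  # toy 4x4 "geometry image"
sig = signature(img)
print(sig)  # 4 coefficients summarize the 16-pixel image
```

Keeping low-frequency subbands quarters the data per level, which is what makes the signature compact enough for large-database matching.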

    Towards fast 3D ear recognition for real-life biometric applications

    Three-dimensional data are increasingly being used for biometric purposes as they offer resilience to problems common in two-dimensional data. They have been successfully applied to face recognition and more recently to ear recognition. However, real-life biometric applications require algorithms that are both robust and efficient so that they scale well with the size of the databases. A novel ear recognition method is presented that uses a generic annotated ear model to register and fit each ear dataset. Then a compact biometric signature is extracted that retains 3D information. The proposed method is evaluated using the largest publicly available 3D ear database appended with our own database, resulting in a database containing data from multiple 3D sensor types. Using this database it is shown that the proposed method is not only robust, accurate and sensor invariant but also extremely efficient, thus making it suitable for real-life biometric applications.

    Evaluation of the UR3D algorithm using the FRGC v2 data set

    From a user's perspective, face recognition is one of the most desirable biometrics, due to its non-intrusive nature. As a large number of face recognition systems have been developed over the past 15 years, their evaluation and comparison by an independent body becomes crucial. Face Recognition Vendor Tests (FRVT) are organized at regular intervals by the National Institute of Standards and Technology (NIST) and cooperating institutions. The last FRVT (2002) showed that no system was capable of offering the accuracy desired for field deployment. Thus the Face Recognition Grand Challenge (FRGC) was set up with the aim of building systems with an order of magnitude higher accuracy. We have applied our previous work on deformable models and 3D representations to the problem of recognizing faces, based on 3D and infrared facial data. The 3D component of our system (UR3D) has been tested on the FRGC v2 dataset and the results are reported here. These results gave us a much needed insight into the performance of UR3D and pointed to areas of further improvement.

    Three-dimensional face recognition in the presence of facial expressions: An annotated deformable model approach

    In this paper, we present the computational tools and a hardware prototype for 3D face recognition. Full automation is provided through the use of advanced multistage alignment algorithms, resilience to facial expressions by employing a deformable model framework, and invariance to 3D capture devices through suitable preprocessing steps. In addition, scalability in both time and space is achieved by converting 3D facial scans into compact metadata. We present our results on the largest known, and now publicly available, Face Recognition Grand Challenge 3D facial database consisting of several thousand scans. To the best of our knowledge, this is the highest performance reported on the FRGC v2 database for the 3D modality. © 2007 IEEE

    Bidirectional relighting for 3D-aided 2D face recognition

    In this paper, we present a new method for bidirectional relighting for 3D-aided 2D face recognition under large pose and illumination changes. During subject enrollment, we build subject-specific 3D annotated models by using the subjects' raw 3D data and 2D texture. During authentication, the probe 2D images are projected onto a normalized image space using the subject-specific 3D model in the gallery. Then, a bidirectional relighting algorithm and two similarity metrics (a view-dependent complex wavelet structural similarity and a global similarity) are employed to compare the gallery and probe. We tested our algorithms on the UHDB11 and UHDB12 databases that contain 3D data with probe images under large lighting and pose variations. The experimental results show the robustness of our approach in recognizing faces in difficult situations. ©2010 IEEE

    3D-2D face recognition with pose and illumination normalization

    In this paper, we propose a 3D-2D framework for face recognition that is more practical than 3D-3D, yet more accurate than 2D-2D. For 3D-2D face recognition, the gallery data comprise 3D shape and 2D texture data and the probes are arbitrary 2D images. A 3D-2D system (UR2D) is presented that is based on a 3D deformable face model that allows registration of 3D and 2D data, face alignment, and normalization of pose and illumination. During enrollment, subject-specific 3D models are constructed using 3D+2D data. For recognition, 2D images are represented in a normalized image space using the gallery 3D models and landmark-based 3D-2D projection estimation. A method for bidirectional relighting is applied for non-linear, local illumination normalization between probe and gallery textures, and a global orientation-based correlation metric is used for pairwise similarity scoring. The generated, personalized, pose- and light-normalized signatures can be used for one-to-one verification or one-to-many identification. Results for 3D-2D face recognition on the UHDB11 3D-2D database with 2D images under large illumination and pose variations support our hypothesis that, in challenging datasets, 3D-2D outperforms 2D-2D and decreases the performance gap against 3D-3D face recognition. Evaluations on FRGC v2.0 3D-2D data with frontal facial images demonstrate that the method can generalize to databases with different and diverse illumination conditions. © 201
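A global orientation-based similarity of the kind mentioned above can be illustrated by comparing the gradient-orientation fields of two pose- and light-normalized textures: orientation is largely invariant to the multiplicative lighting changes that relighting may not fully remove. This is a hedged stand-in for the idea, not the paper's exact metric.

```python
import numpy as np

def orientation_field(img):
    # Per-pixel gradient orientation of the texture; np.gradient returns
    # derivatives along axis 0 (rows) then axis 1 (columns).
    gy, gx = np.gradient(img.astype(float))
    return np.arctan2(gy, gx)

def orientation_similarity(a, b):
    # Mean cosine of the orientation difference: 1.0 when gradient
    # directions agree everywhere, lower as they diverge.
    return float(np.mean(np.cos(orientation_field(a) - orientation_field(b))))

a = np.tile(np.arange(8.0), (8, 1))        # horizontal intensity ramp
print(orientation_similarity(a, a * 3.0))  # scaled brightness -> 1.0
```

Note that `a * 3.0` simulates a global illumination gain: intensities change but gradient orientations do not, so the score stays at 1.0, whereas rotating the pattern (e.g. comparing against `a.T`) drives it toward 0.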

    Multimodal Face Recognition: Combination of Geometry with . . .

    It is becoming increasingly important to be able to credential and identify authorized personnel at key points of entry. Such identity management systems commonly employ biometric identifiers. In this paper, we present a novel multimodal facial recognition approach that employs data from both visible spectrum and thermal infrared sensors. Data from multiple cameras are used to construct a three-dimensional mesh representing the face and a facial thermal texture map. An annotated face model with explicit two-dimensional parameterization (UV) is then fitted to this data to construct: 1) a three-channel UV deformation image encoding geometry, and 2) a one-channel UV vasculature image encoding facial vasculature. Recognition is accomplished by comparing: 1) the parametric deformation images, 2) the parametric vasculature images, and 3) the visible spectrum texture maps. The novelty of our work lies in the use of deformation images and physiological information as means for comparison. We have performed extensive tests on the Face Recognition Grand Challenge v1.0 dataset and on our own multimodal database with very encouraging results.
